21 research outputs found
Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge
This paper presents a state-of-the-art model for visual question answering
(VQA), which won first place in the 2017 VQA Challenge. VQA is a task of
significant importance for research in artificial intelligence, given its
multimodal nature, clear evaluation protocol, and potential real-world
applications. The performance of deep neural networks for VQA is very dependent
on choices of architectures and hyperparameters. To help further research in
the area, we describe in detail our high-performing, though relatively simple
model. Through a massive exploration of architectures and hyperparameters
representing more than 3,000 GPU-hours, we identified tips and tricks that lead
to its success, namely: sigmoid outputs, soft training targets, image features
from bottom-up attention, gated tanh activations, output embeddings initialized
using GloVe and Google Images, large mini-batches, and smart shuffling of
training data. We provide a detailed analysis of their impact on performance to
assist others in making an appropriate selection.
Comment: Winner of the 2017 Visual Question Answering (VQA) Challenge at CVPR
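Two of the listed tricks, sigmoid outputs and soft training targets, can be sketched concretely. The following is a minimal illustrative example, not the authors' code: soft targets follow the standard VQA accuracy formula min(votes/3, 1) over annotator votes, and each candidate answer gets an independent sigmoid trained with binary cross-entropy, so multiple answers can receive partial credit at once.

```python
import numpy as np

def soft_targets(vote_counts, num_answers):
    """Map per-answer annotator vote counts to soft targets in [0, 1],
    using the VQA accuracy rule min(votes / 3, 1)."""
    t = np.zeros(num_answers)
    for answer_idx, votes in vote_counts.items():
        t[answer_idx] = min(votes / 3.0, 1.0)
    return t

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(logits, targets):
    """Binary cross-entropy summed over candidate answers; each answer is
    an independent sigmoid rather than one softmax over all answers."""
    p = sigmoid(logits)
    eps = 1e-12
    return float(-np.sum(targets * np.log(p + eps)
                         + (1 - targets) * np.log(1 - p + eps)))

# Toy question with 4 candidate answers: 3 annotators said answer 0,
# 1 annotator said answer 1.
t = soft_targets({0: 3, 1: 1}, num_answers=4)
loss = bce_loss(np.array([5.0, -1.0, -5.0, -5.0]), t)
```

The elementwise sigmoid is what lets the soft, multi-valued targets act as graded supervision instead of forcing a single hard label.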
Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup
Mixup is a highly successful technique to improve generalization of neural
networks by augmenting the training data with combinations of random pairs.
Selective mixup is a family of methods that apply mixup to specific pairs, e.g.
only combining examples across classes or domains. These methods have claimed
remarkable improvements on benchmarks with distribution shifts, but their
mechanisms and limitations remain poorly understood.
We examine an overlooked aspect of selective mixup that explains its success
in a completely new light. We find that the non-random selection of pairs
affects the training distribution and improves generalization by means
completely unrelated to the mixing. For example, in binary classification, mixup
across classes implicitly resamples the data for a uniform class distribution -
a classical solution to label shift. We show empirically that this implicit
resampling explains much of the improvements in prior work. Theoretically,
these results rely on a regression toward the mean, an accidental property that
we identify in several datasets.
We have found a new equivalence between two successful methods: selective
mixup and resampling. We identify limits of the former, confirm the
effectiveness of the latter, and find better combinations of their respective
benefits.
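The implicit-resampling effect described above can be demonstrated in a few lines. This is a toy sketch under simple assumptions, not the paper's experiments: on an imbalanced binary dataset, pairing examples across classes means every sampled pair contains one example of each class, so the examples the model actually sees are class-balanced regardless of any mixing that follows.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cross_class_pairs(labels, n_pairs, rng):
    """Selective-mixup-style pairing: each pair combines two different
    classes. Returns the labels of all sampled pair endpoints."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    drawn = []
    for _ in range(n_pairs):
        a, b = rng.choice(classes, size=2, replace=False)  # two distinct classes
        drawn.append(rng.choice(np.flatnonzero(labels == a)))
        drawn.append(rng.choice(np.flatnonzero(labels == b)))
    return labels[drawn]

# Imbalanced binary training set: 90% class 0, 10% class 1.
labels = np.array([0] * 900 + [1] * 100)
seen = sample_cross_class_pairs(labels, n_pairs=5000, rng=rng)
frac_class1 = float(np.mean(seen == 1))  # 0.5, not the original 0.1
```

With two classes, each pair contributes exactly one example per class, so the effective training distribution is uniform over labels, which is the classical correction for label shift mentioned in the abstract.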
Visual Question Answering: A Survey of Methods and Datasets
Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
Comment: 25 pages
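The common joint-embedding approach the survey describes, mapping CNN image features and RNN question features into a shared space, can be sketched as follows. All dimensions and weights here are illustrative placeholders, not any specific published model; elementwise product is one of several fusion operators in use.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_q, d_joint = 2048, 512, 300  # placeholder dimensions

# Random projection weights stand in for learned parameters.
W_img = rng.normal(scale=0.01, size=(d_joint, d_img))
W_q = rng.normal(scale=0.01, size=(d_joint, d_q))

def joint_embedding(img_feat, q_feat):
    """Project both modalities into a common d_joint space and fuse
    them with an elementwise (Hadamard) product."""
    v = np.tanh(W_img @ img_feat)  # image branch (e.g. CNN features)
    q = np.tanh(W_q @ q_feat)      # question branch (e.g. RNN state)
    return v * q

fused = joint_embedding(rng.normal(size=d_img), rng.normal(size=d_q))
```

A classifier over candidate answers would then be trained on the fused vector; the fusion operator is where most architectural variants discussed in the survey differ.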
On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Out-of-distribution (OOD) testing is increasingly popular for evaluating a
machine learning system's ability to generalize beyond the biases of a training
set. OOD benchmarks are designed to present a different joint distribution of
data and labels between training and test time. VQA-CP has become the standard
OOD benchmark for visual question answering, but we discovered three troubling
practices in its current use. First, most published methods rely on explicit
knowledge of the construction of the OOD splits. They often rely on
"inverting" the distribution of labels, e.g. answering mostly 'yes' when the
common training answer is 'no'. Second, the OOD test set is used for model
selection. Third, a model's in-domain performance is assessed after retraining
it on in-domain splits (VQA v2) that exhibit a more balanced distribution of
labels. These three practices defeat the objective of evaluating
generalization, and put into question the value of methods specifically
designed for this dataset. We show that embarrassingly-simple methods,
including one that generates answers at random, surpass the state of the art on
some question types. We provide short- and long-term solutions to avoid these
pitfalls and realize the benefits of OOD evaluation.
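The "embarrassingly simple" failure mode is easy to reproduce on toy data. The sketch below is illustrative, not the paper's baseline: because VQA-CP inverts answer distributions between splits, a method that simply predicts a rare training answer for each question type can score well on the inverted test split without any generalization.

```python
import numpy as np
from collections import Counter

def least_common_answer_per_type(train):
    """train: list of (question_type, answer) pairs. For each question
    type, predict the answer seen *least* often in training."""
    by_type = {}
    for q_type, answer in train:
        by_type.setdefault(q_type, Counter())[answer] += 1
    return {t: min(c, key=c.get) for t, c in by_type.items()}

# Toy training split: yes/no questions answered 'no' 90% of the time.
train = [("yes/no", "no")] * 90 + [("yes/no", "yes")] * 10
predict = least_common_answer_per_type(train)

# Toy "inverted" test split, mirroring the VQA-CP construction.
test = [("yes/no", "yes")] * 90 + [("yes/no", "no")] * 10
acc = float(np.mean([predict[t] == a for t, a in test]))  # 0.9 here
```

High accuracy here reflects knowledge of how the splits were built, not reasoning over images, which is exactly the Goodhart's-law concern the paper raises.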
Unshuffling Data for Improved Generalization
Generalization beyond the training distribution is a core challenge in
machine learning. The common practice of mixing and shuffling examples when
training neural networks may not be optimal in this regard. We show that
partitioning the data into well-chosen, non-i.i.d. subsets treated as multiple
training environments can guide the learning of models with better
out-of-distribution generalization. We describe a training procedure to capture
the patterns that are stable across environments while discarding spurious
ones. The method makes a step beyond correlation-based learning: the choice of
the partitioning allows injecting information about the task that cannot be
otherwise recovered from the joint distribution of the training data. We
demonstrate multiple use cases with the task of visual question answering,
which is notorious for dataset biases. We obtain significant improvements on
VQA-CP, using environments built from prior knowledge, existing meta data, or
unsupervised clustering. We also get improvements on GQA using annotations of
"equivalent questions", and on multi-dataset training (VQA v2 / Visual Genome)
by treating them as distinct environments.
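The core idea, keeping patterns that are stable across environments while discarding spurious ones, can be sketched with a crude feature filter. This is a toy stand-in for the training procedure, under the assumption that a feature whose correlation with the label varies strongly across environments is spurious; the environments, features, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_features(envs, tol=0.3):
    """envs: list of (X, y) arrays, one per environment. Keep features
    whose feature-label correlation varies by less than `tol` across
    environments; discard the rest as spurious."""
    corrs = []
    for X, y in envs:
        corrs.append([np.corrcoef(X[:, j], y)[0, 1]
                      for j in range(X.shape[1])])
    corrs = np.array(corrs)                      # (n_envs, n_features)
    spread = corrs.max(axis=0) - corrs.min(axis=0)
    return np.flatnonzero(spread < tol)

def make_env(n, spurious_sign):
    """One environment: feature 0 is stably predictive, feature 1 flips
    its relationship to the label between environments."""
    y = rng.integers(0, 2, size=n).astype(float)
    stable = y + 0.1 * rng.normal(size=n)
    spurious = spurious_sign * y + 0.5 * rng.normal(size=n)
    return np.column_stack([stable, spurious]), y

envs = [make_env(500, +1.0), make_env(500, -1.0)]
kept = stable_features(envs)  # only the stable feature survives
```

Shuffling the two environments together would hide the sign flip and leave the spurious feature looking predictive, which is the point of not mixing the data.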